
    Quantum-classical tradeoffs and multi-controlled quantum gate decompositions in variational algorithms

    Quantum algorithms for unconstrained optimization problems, such as the Quantum Approximate Optimization Algorithm (QAOA), have been proposed as interesting near-term algorithms which operate under a hybrid quantum-classical execution model. Recent work has shown that the QAOA can also be applied to constrained combinatorial optimization problems by incorporating the problem constraints within the design of the variational ansatz, often resulting in quantum circuits containing many multi-controlled gate operations. This paper investigates potential resource tradeoffs for the QAOA when applied to the particular constrained optimization problem of Maximum Independent Set. We consider three variants of the QAOA which make different tradeoffs between the number of classical parameters, quantum gates, and iterations of classical optimization. We also study the quantum cost of decomposing the QAOA circuits on hardware which may support different qubit technologies and native gate sets, and compare the different algorithms using the gate decomposition score, which combines the fidelity of the gate operations with the efficiency of the decomposition into a single metric. We find that all three QAOA variants can attain similar performance, but the classical and quantum resource costs may vary greatly between them.
    Comment: 17 pages, 8 figures, 5 tables
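    For context on how a Maximum Independent Set objective enters a QAOA-style hybrid loop, here is a minimal classical sketch (not the paper's code): it scores candidate bitstrings with an independence-penalty cost, which is the quantity a QAOA expectation value would target. The example graph, penalty weight, and function names are illustrative assumptions.

```python
# Illustrative sketch (assumed, not from the paper): the penalized Maximum
# Independent Set cost that a QAOA hybrid loop would optimize on a toy graph.
from itertools import product

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # assumed 4-node example graph
n_qubits = 4
penalty = 2.0  # assumed penalty weight for violating independence

def mis_cost(bits, edges, penalty):
    """Count selected vertices, minus a penalty for every selected edge pair."""
    reward = sum(bits)
    violations = sum(bits[i] * bits[j] for i, j in edges)
    return reward - penalty * violations

# A hybrid quantum-classical loop would prepare a parametrized state, sample
# bitstrings from it, and feed the average cost back to a classical optimizer.
# Here we simply enumerate all bitstrings to show what is being optimized.
best = max(product([0, 1], repeat=n_qubits),
           key=lambda b: mis_cost(b, edges, penalty))
print("best bitstring:", best, "cost:", mis_cost(best, edges, penalty))
```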

    Coreset Clustering on Small Quantum Computers

    Many quantum algorithms for machine learning require access to classical data in superposition. However, for many natural data sets and algorithms, the overhead required to load the data set in superposition can erase any potential quantum speedup over classical algorithms. Recent work by Harrow introduces a new paradigm in hybrid quantum-classical computing to address this issue, relying on coresets to minimize the data loading overhead of quantum algorithms. We investigate using this paradigm to perform k-means clustering on near-term quantum computers, by casting it as a QAOA optimization instance over a small coreset. We compare the performance of this approach to classical k-means clustering both numerically and experimentally on IBM Q hardware. We are able to find data sets where coresets work well relative to random sampling and where QAOA could potentially outperform standard k-means on a coreset. However, finding data sets where both coresets and QAOA work well (which is necessary for a quantum advantage over k-means on the entire data set) appears to be challenging.
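    As a rough illustration of the approach described above (not code from the paper), 2-means clustering of a weighted coreset can be written as a binary partition problem over one bit per coreset point, and a QAOA instance would minimize the same cost. The coreset points, weights, and helper names below are assumptions for the sketch.

```python
# Illustrative sketch: 2-means on a weighted coreset as a binary partition cost.
# The coreset points and weights are made-up; QAOA would minimize the same cost
# by assigning one qubit per coreset point (0 = cluster A, 1 = cluster B).
import numpy as np
from itertools import product

points = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.1], [2.9, 3.0]])  # assumed coreset
weights = np.array([5.0, 3.0, 4.0, 6.0])                              # assumed weights

def weighted_2means_cost(assignment, points, weights):
    """Weighted sum of squared distances to the two weighted cluster centroids."""
    cost = 0.0
    for label in (0, 1):
        mask = np.array(assignment) == label
        if not mask.any():
            continue
        w = weights[mask]
        centroid = (w[:, None] * points[mask]).sum(axis=0) / w.sum()
        cost += (w * ((points[mask] - centroid) ** 2).sum(axis=1)).sum()
    return cost

# Brute-force search over all partitions stands in for the quantum optimizer.
best = min(product([0, 1], repeat=len(points)),
           key=lambda a: weighted_2means_cost(a, points, weights))
print("best partition:", best)
```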

    Architectures for Multinode Superconducting Quantum Computers

    Many proposals to scale quantum technology rely on modular or distributed designs where individual quantum processors, called nodes, are linked together to form one large multinode quantum computer (MNQC). One scalable method to construct an MNQC is using superconducting quantum systems with optical interconnects. However, a limiting factor of these machines will be internode gates, which may be two to three orders of magnitude noisier and slower than local operations. Surmounting the limitations of internode gates will require a range of techniques, including improvements in entanglement generation, the use of entanglement distillation, and optimized software and compilers, and it remains unclear how improvements to these components interact to affect overall system performance, what performance from each is required, or even how to quantify the performance of each. In this paper, we employ a 'co-design'-inspired approach to quantify overall MNQC performance in terms of hardware models of internode links, entanglement distillation, and local architecture. In the case of superconducting MNQCs with microwave-to-optical links, we uncover a tradeoff between entanglement generation and distillation that threatens to degrade performance. We show how to navigate this tradeoff, lay out how compilers should optimize between local and internode gates, and discuss when noisy quantum links have an advantage over purely classical links. Using these results, we introduce a roadmap for the realization of early MNQCs which illustrates potential improvements to the hardware and software of MNQCs and outlines criteria for evaluating the landscape, from progress in entanglement generation and quantum memory to dedicated algorithms such as distributed quantum phase estimation. While we focus on superconducting devices with optical interconnects, our approach is general across MNQC implementations.
    Comment: 23 pages, white paper
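    To make the local-versus-internode tradeoff concrete, here is a toy model (my own illustrative assumption, not the paper's framework): end-to-end circuit fidelity is approximated as a product of gate fidelities, and a simplified recurrence stands in for an entanglement distillation protocol.

```python
# Toy model (assumed, not from the paper): estimate multinode circuit fidelity
# from local gate count/fidelity and internode gate count/fidelity, where each
# round of a simplified distillation recurrence trades extra raw Bell pairs
# for a higher-fidelity internode link.
def distill(fidelity, rounds):
    """Simplified pairwise distillation map applied `rounds` times (assumption)."""
    f = fidelity
    for _ in range(rounds):
        f = f * f / (f * f + (1.0 - f) * (1.0 - f))
    return f

def circuit_fidelity(n_local, f_local, n_internode, f_link_raw, distill_rounds):
    """Crude product-of-fidelities estimate for a multinode circuit."""
    f_link = distill(f_link_raw, distill_rounds)
    return (f_local ** n_local) * (f_link ** n_internode)

# More distillation rounds improve each internode gate but consume roughly
# 2**rounds raw pairs per link, which is the kind of generation/distillation
# tradeoff discussed above.
for rounds in range(4):
    print(rounds, circuit_fidelity(n_local=200, f_local=0.999,
                                   n_internode=10, f_link_raw=0.9,
                                   distill_rounds=rounds))
```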

    Co-designing Quantum Computer Architectures and Algorithms to Bridge the Quantum Resource Gap

    Quantum computing is a new computational paradigm based on the laws of quantum physics developed over the last century. Quantum computers (QCs) manipulate quantum states and exploit non-classical phenomena, such as superposition and entanglement, to perform computations. Given this computational model, many quantum algorithms have been developed which are theoretically capable of outperforming any classical computer for certain applications such as factoring large integers, optimization, and simulating highly entangled quantum systems. However, the quantum programs implementing these high-impact applications are extremely resource demanding. Their time and space requirements outstrip the capabilities of current QC systems by many orders of magnitude. I refer to this mismatch between the resources demanded by applications and what is available on current hardware as the Quantum Resource Gap (QRG). This dissertation presents a strategy for overcoming the QRG by advocating for the design of domain-specific quantum accelerators. I discuss how this strategy may be pursued using quantum benchmarks and program profiling to identify matches between applications and architectures that are well suited to one another. Once a particular application-architecture match is found, the algorithm's execution can be optimized with cross-layer, co-design techniques that incorporate relevant information from across the entire hardware-software stack. To demonstrate the advantages of this approach, I discuss three examples covering molecular simulation, data set clustering, and constrained combinatorial optimization applications.

    Coreset Clustering on Small Quantum Computers

    Many quantum algorithms for machine learning require access to classical data in superposition. However, for many natural data sets and algorithms, the overhead required to load the data set in superposition can erase any potential quantum speedup over classical algorithms. Recent work by Harrow introduces a new paradigm in hybrid quantum-classical computing to address this issue, relying on coresets to minimize the data loading overhead of quantum algorithms. We investigated using this paradigm to perform k-means clustering on near-term quantum computers, by casting it as a QAOA optimization instance over a small coreset. We used numerical simulations to compare the performance of this approach to classical k-means clustering. We were able to find data sets with which coresets work well relative to random sampling and where QAOA could potentially outperform standard k-means on a coreset. However, finding data sets where both coresets and QAOA work well—which is necessary for a quantum advantage over k-means on the entire data set—appears to be challenging.

    CutQC: using small Quantum computers for large Quantum circuit evaluations

    Quantum computing (QC) is a new paradigm offering the potential of exponential speedups over classical computing for certain computational problems. Each additional qubit doubles the size of the computational state space available to a QC algorithm. This exponential scaling underlies QC's power, but today's Noisy Intermediate-Scale Quantum (NISQ) devices face significant engineering challenges in scalability. The set of quantum circuits that can be reliably run on NISQ devices is limited by their noisy operations and low qubit counts. This paper introduces CutQC, a scalable hybrid computing approach that combines classical computers and quantum computers to enable evaluation of quantum circuits that cannot be run on classical or quantum computers alone. CutQC cuts large quantum circuits into smaller subcircuits, allowing them to be executed on smaller quantum devices. Classical postprocessing can then reconstruct the output of the original circuit. This approach offers significant runtime speedup compared with the only viable current alternative—purely classical simulations—and demonstrates evaluation of quantum circuits that are larger than the limit of QC or classical simulation alone. Furthermore, in real-system runs, CutQC achieves much higher quantum circuit evaluation fidelity using small prototype quantum computers than the state-of-the-art large NISQ devices achieve. Overall, this hybrid approach allows users to leverage classical and quantum computing resources to evaluate quantum programs far beyond the reach of either one alone.
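    To illustrate the principle behind circuit cutting (a generic sketch, not CutQC's actual implementation), the state carried by a cut wire can be reconstructed from Pauli-basis measurements of the upstream subcircuit and Pauli-eigenstate initializations of the downstream one. The short numpy check below verifies the single-qubit identity rho = sum_O Tr(O rho) O / 2 over O in {I, X, Y, Z} that makes this reconstruction possible; the random state is an assumption for the demo.

```python
# Generic illustration (not CutQC's code): the identity
#   rho = sum_O Tr(O rho) O / 2,  O in {I, X, Y, Z}.
# Cutting a wire replaces the cut qubit's state with this sum, so the upstream
# subcircuit is measured in each Pauli basis, the downstream subcircuit is
# re-run on the corresponding eigenstates, and classical postprocessing
# recombines the results.
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Random single-qubit density matrix standing in for the state on the cut wire.
rng = np.random.default_rng(0)
v = rng.normal(size=2) + 1j * rng.normal(size=2)
v /= np.linalg.norm(v)
rho = np.outer(v, v.conj())

# Reconstruct rho from its Pauli expectation values, as a wire cut would.
reconstructed = sum(np.trace(P @ rho) * P / 2 for P in (I, X, Y, Z))
print("max reconstruction error:", np.max(np.abs(reconstructed - rho)))
```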